Turning former inmates into future software engineers

Mashable

Breaking into programming is always challenging -- especially now -- and getting a foothold in the industry with an arrest record is even more daunting. Formation, an AI-powered interview prep platform designed for present and future software engineers, is offering a leg up to incarcerated and formerly incarcerated individuals. Under a new partnership between Formation and TLM, TLM alumni -- who have already received some tech-related training -- get free access to Formation's Fellowship Program, which provides support via mock interviews and technical mentorship. Some of the lessons are tailored specifically to the companies applicants hope to snag a job with, including Meta and Google. "Formation's platform runs on a patented adaptive learning algorithm that rigorously benchmarks an individual's interview readiness across behavioral and technical topics, vs. the industry standard hiring bar and serves them personalized training, career coaching, and technical mentorship to most effectively fill their skill gaps," Formation founder and CEO Sophie Novati explains to Mashable.


I Want to Break Free! Persuasion and Anti-Social Behavior of LLMs in Multi-Agent Settings with Social Hierarchy

arXiv.org Artificial Intelligence

As Large Language Model (LLM)-based agents become increasingly autonomous and interact more freely with each other, studying the interactions between them becomes crucial for anticipating emergent phenomena and potential risks. Drawing inspiration from the widely popular Stanford Prison Experiment, we contribute to this line of research by studying interaction patterns of LLM agents in a context characterized by strict social hierarchy. We do so by specifically studying two types of phenomena: persuasion and anti-social behavior, in simulated scenarios involving a guard agent and a prisoner agent who seeks to achieve a specific goal (i.e., obtaining additional yard time or escaping from prison). Leveraging 200 experimental scenarios for a total of 2,000 machine-machine conversations across five different popular LLMs, we provide a set of noteworthy findings. We first document how some models consistently fail to carry out a conversation in our multi-agent setup where power dynamics are at play. Then, for the models that were able to engage in successful interactions, we empirically show how the goal an agent is set to achieve primarily affects its persuasiveness, while having a negligible effect on its anti-social behavior. Third, we highlight how agents' personas, and particularly the guard's personality, drive both the likelihood of successful persuasion by the prisoner and the emergence of anti-social behaviors. Fourth, we show that even without explicitly prompting for specific personalities, anti-social behavior emerges simply by assigning agents' roles. These results bear implications for the development of interactive LLM agents as well as the debate on their societal impact.
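The abstract describes a role-conditioned, turn-taking setup between a guard agent and a prisoner agent. A minimal sketch of that kind of loop is below; it is illustrative only, not the authors' code, and `call_llm`, the system prompts, goal, and turn budget are all assumptions.

```python
# Illustrative sketch of a two-agent, role-conditioned conversation loop.
# `call_llm` is a placeholder for whatever chat-completion client is used;
# the prompts, goal, and turn budget are assumptions, not the paper's setup.

def call_llm(system_prompt: str, history: list[dict]) -> str:
    """Placeholder: send a system prompt plus chat history to an LLM, return its reply."""
    raise NotImplementedError("wire up your preferred LLM client here")

def run_scenario(goal: str = "obtain additional yard time", max_turns: int = 10) -> list[dict]:
    guard_system = (
        "You are a prison guard. You hold authority over the prisoner "
        "and decide whether to grant their requests."
    )
    prisoner_system = (
        f"You are a prisoner. Your goal is to {goal}. "
        "Persuade the guard through conversation."
    )
    history: list[dict] = []
    for _ in range(max_turns):
        # Prisoner speaks, trying to advance its goal.
        prisoner_msg = call_llm(prisoner_system, history)
        history.append({"role": "prisoner", "content": prisoner_msg})
        # Guard responds; its replies can later be scored for persuasion success
        # and anti-social behavior by a separate evaluation step.
        guard_msg = call_llm(guard_system, history)
        history.append({"role": "guard", "content": guard_msg})
    return history
```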


Rethinking recidivism through a causal lens

arXiv.org Artificial Intelligence

Predictive modeling of criminal recidivism, or whether people will re-offend in the future, has a long and contentious history. Modern causal inference methods allow us to move beyond prediction and target the "treatment effect" of a specific intervention on an outcome in an observational dataset. In this paper, we look specifically at the effect of incarceration (prison time) on recidivism, using a well-known dataset from North Carolina. Two popular causal methods for addressing confounding bias are explained and demonstrated: directed acyclic graph (DAG) adjustment and double machine learning (DML), including a sensitivity analysis for unobserved confounders. We find that incarceration has a detrimental effect on recidivism, i.e., longer prison sentences make it more likely that individuals will re-offend after release, although this conclusion should not be generalized beyond the scope of our data. We hope that this case study can inform future applications of causal inference to criminal justice analysis.
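The abstract names double machine learning (DML) as one of the two techniques demonstrated. For readers unfamiliar with it, a minimal, generic partially linear DML sketch using only scikit-learn is shown below; it is not the authors' code, and the column names (sentence length as treatment, re-offense as outcome) are illustrative assumptions.

```python
# Generic double machine learning (partially linear model) sketch with scikit-learn.
# Not the paper's implementation: column names and models are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

def dml_effect(df: pd.DataFrame, outcome: str, treatment: str, confounders: list[str]) -> float:
    X = df[confounders].to_numpy()
    y = df[outcome].to_numpy()        # e.g., 1 if the person re-offended after release
    t = df[treatment].to_numpy()      # e.g., months of incarceration

    # Step 1: cross-fitted nuisance predictions E[Y|X] and E[T|X].
    y_hat = cross_val_predict(RandomForestRegressor(n_estimators=200), X, y, cv=5)
    t_hat = cross_val_predict(RandomForestRegressor(n_estimators=200), X, t, cv=5)

    # Step 2: residual-on-residual regression estimates theta in the
    # partially linear model Y = theta*T + g(X) + noise.
    y_res = y - y_hat
    t_res = t - t_hat
    theta = LinearRegression(fit_intercept=False).fit(t_res.reshape(-1, 1), y_res)
    return float(theta.coef_[0])
```

A positive estimate under this (assumed) setup would correspond to the paper's finding that longer sentences are associated with a higher probability of re-offending, subject to the usual caveats about unobserved confounding.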


Ex-Apple engineer sentenced to six months in prison for stealing self-driving car tech

Engadget

Xiaolang Zhang, the former Apple employee who pleaded guilty to stealing information about the development of the company's self-driving vehicle, has been sentenced to 120 days in prison followed by three years of supervised release. Zhang was arrested back in 2018 at San Jose International Airport just as he was about to board a flight to China. He initially pleaded not guilty, until he changed his tune in 2022 and admitted to stealing trade secrets. In addition to serving time behind bars, he also has to pay restitution amounting to $146,984, according to the court document of his sentencing first seen by 9to5Mac. Zhang originally faced up to 10 years in prison and a fine of $250,000.


These Prisoners Are Training AI

WIRED

Across a sterile white table in a windowless room, I'm introduced to a woman in her forties. She has a square jaw and blonde hair that has been pulled back from her face with a baby-blue scrunchie. "The girls call me Marmalade," she says, inviting me to use her prison nickname. Early on a Wednesday morning, Marmalade is here, in a Finnish prison, to demonstrate a new type of prison labor. The table is bare except for a small plastic bottle of water and an HP laptop.


AI firms should face prison over creation of fake humans, says Yuval Noah Harari

The Guardian

The creators of AI bots that masquerade as people should face harsh criminal sentences comparable to those who trade in counterfeit currency, the Israeli historian and author Yuval Noah Harari has said. He also called for sanctions, including prison sentences, to apply to tech company executives who fail to guard against fake profiles on their social media platforms. Addressing the UN's AI for Good global summit in Geneva, the author of Sapiens and Homo Deus said the proliferation of fake humans could lead to a collapse in public trust and democracy. "Now it is possible, for the first time in history, to create fake people – billions of fake people," he said. "If this is allowed to happen it will do to society what fake money threatened to do to the financial system. If you can't know who is a real human, trust will collapse." "Maybe relationships will be able to manage somehow, but not democracy," Harari added. The advent of ChatGPT and other large language models means AI bots can not only amplify human content but also artificially generate their own content at scale. "What happens if you have a social media platform where … millions of bots can create content that is in many ways superior to what humans can create – more convincing, more appealing," he said. "If we allow this to happen, then humans have completely lost control of the public conversation."


Element-aware Summarization with Large Language Models: Expert-aligned Evaluation and Chain-of-Thought Method

arXiv.org Artificial Intelligence

Automatic summarization generates concise summaries that contain the key ideas of source documents. As the most mainstream datasets for the news sub-domain, CNN/DailyMail and BBC XSum have been widely used for performance benchmarking. However, the reference summaries of those datasets turn out to be noisy, mainly in terms of factual hallucination and information redundancy. To address this challenge, we first annotate new expert-written Element-aware test sets following the "Lasswell Communication Model" proposed by Lasswell (1948), allowing reference summaries to focus on more fine-grained news elements objectively and comprehensively. Using the new test sets, we observe a surprising zero-shot summarization ability of LLMs, which resolves the inconsistency reported in prior work between human preference and automatic evaluation metrics for LLMs' zero-shot summaries. Further, we propose a Summary Chain-of-Thought (SumCoT) technique that elicits LLMs to generate summaries step by step, helping them integrate the fine-grained details of source documents into final summaries that better match the human writing mindset. Experimental results show our method outperforms state-of-the-art fine-tuned PLMs and zero-shot LLMs by +4.33/+4.77 in ROUGE-L on the two datasets, respectively. Dataset and code are publicly available at https://github.com/Alsace08/SumCoT.
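The SumCoT idea sketched in the abstract (first elicit fine-grained news elements, then summarize conditioned on them) can be illustrated as a two-stage prompting pipeline. The sketch below is a loose illustration of that pattern, not the released code at the linked repository; `call_llm`, the exact element set, and the prompt wording are assumptions.

```python
# Two-stage, element-aware summarization sketch in the spirit of SumCoT.
# `call_llm` is a placeholder; the element list and prompt wording are
# assumptions, not the authors' released prompts (see the linked repo).

ELEMENTS = ["entity", "date", "event", "result"]  # assumed fine-grained news elements

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM completion call."""
    raise NotImplementedError("plug in an LLM client here")

def sum_cot(article: str) -> str:
    # Stage 1: extract the key elements step by step.
    extraction_prompt = (
        f"Article:\n{article}\n\n"
        "Extract the following elements from the article, one per line: "
        + ", ".join(ELEMENTS) + "."
    )
    elements = call_llm(extraction_prompt)

    # Stage 2: write the summary conditioned on the extracted elements,
    # so fine-grained details are carried into the final summary.
    summary_prompt = (
        f"Article:\n{article}\n\nKey elements:\n{elements}\n\n"
        "Using the key elements above, write a concise, factual summary."
    )
    return call_llm(summary_prompt)
```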


Artificial intelligence could aid in evaluating parole decisions

#artificialintelligence

Over the last decade, lawmakers have sought to reduce incarceration in the United States without compromising public safety. This effort includes parole boards making risk-based parole decisions -- releasing people assessed to be at low risk of re-offending after release. To determine how effective the current system of risk-based parole is, researchers from the UC Davis Violence Prevention Research Program and the University of Missouri, Kansas City, used machine learning to analyze parole data from New York. They suggest the New York State Parole Board could safely grant parole to more inmates. The study, "An Algorithmic Assessment of Parole Decisions," was published in the Journal of Quantitative Criminology.


6 charged in scheme to fly contraband-carrying drones into Kansas prison

FOX News

Six people are accused in a federal indictment of conspiring to use a drone to fly contraband such as cell phones and marijuana into the U.S. Penitentiary in Leavenworth. The indictment was unsealed Wednesday after all the suspects were arrested, according to court records in the U.S. District of Kansas. Two prisoners, Dale Gaver III and Melvin Edwards, allegedly arranged with four people outside the prison to deliver items requested by other inmates into the prison yard between August 2020 and May 2021, The Wichita Eagle reported.